# Mathematical programming optimization
## DeepSeek R1 0528 AWQ
**License:** MIT · **Author:** adamo1139 · **Downloads:** 161 · **Likes:** 2
**Tags:** Large Language Model, Transformers

A 4-bit AWQ-quantized version of the 671B-parameter DeepSeek-R1-0528 model, suitable for deployment on high-end GPU nodes.
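A minimal sketch of how an AWQ-quantized checkpoint like this might be served with vLLM. The model path and parallelism settings are assumptions, not values from the listing; a 671B-parameter model requires multi-GPU tensor parallelism well beyond a single card.

```python
from vllm import LLM, SamplingParams

# Load an AWQ-quantized checkpoint. The path below is a hypothetical
# placeholder; point it at the actual downloaded model directory or repo ID.
llm = LLM(
    model="path/to/DeepSeek-R1-0528-AWQ",  # hypothetical location
    quantization="awq",
    tensor_parallel_size=8,  # shard weights across 8 GPUs; tune to your node
)

params = SamplingParams(temperature=0.6, max_tokens=512)
outputs = llm.generate(["Prove that the square root of 2 is irrational."], params)
print(outputs[0].outputs[0].text)
```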
## Qwen3 30B A3B GGUF
**License:** Apache-2.0 · **Author:** lmstudio-community · **Downloads:** 77.06k · **Likes:** 21
**Tags:** Large Language Model

A large language model developed by Qwen that supports a context length of 131,072 tokens and excels at creative writing, role-playing, and multi-turn conversation.
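A minimal sketch of running a GGUF build such as this one (or the EXAONE Deep GGUF below) locally with llama-cpp-python. The file path, context size, and quantization suffix are assumptions; substitute the actual downloaded `.gguf` file.

```python
from llama_cpp import Llama

# Load a local GGUF file. The path is a hypothetical placeholder.
llm = Llama(
    model_path="models/Qwen3-30B-A3B-Q4_K_M.gguf",  # hypothetical local path
    n_ctx=32768,      # the model supports up to 131,072 tokens; use what fits in memory
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a two-sentence story."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```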
## EXAONE Deep 2.4B GGUF
**License:** Other · **Author:** Mungert · **Downloads:** 968 · **Likes:** 3
**Tags:** Large Language Model, Supports Multiple Languages

EXAONE Deep is an efficient 2.4B-parameter reasoning language model developed by LG AI Research that excels at reasoning tasks such as mathematics and programming.